Search Results
Search for: All records
Total Resources: 4
Author / Contributor
- Wu, Zhaofeng (4)
- Pappas, Nikolaos (3)
- Peng, Hao (3)
- Smith, Noah A. (3)
- Kasai, Jungo (1)
- Kim, Yoon (1)
- Kong, Lingpeng (1)
- Linzen, Tal (1)
- Merrill, William (1)
- Naka, Norihito (1)
- Schwartz, Roy (1)
- Yogatama, Dani (1)
- Do LMs infer the semantics of text from co-occurrence patterns in their training data? Merrill et al. (2022) argue that, in theory, sentence co-occurrence probabilities predicted by an optimal LM should reflect the entailment relationship between the constituent sentences, but it is unclear whether probabilities predicted by neural LMs encode entailment in this way, because of strong assumptions made by Merrill et al. (namely, that humans always avoid redundancy). In this work, we investigate whether their theory can be used to decode entailment relations from neural LMs. We find that a test similar to theirs can decode entailment relations between natural sentences, well above random chance though not perfectly, across many datasets and LMs. This suggests that LMs implicitly model aspects of semantics in order to predict semantic effects on sentence co-occurrence patterns. However, we find that the test which predicts entailment in practice works in the opposite direction from the theoretical test. We therefore revisit the assumptions underlying the original test, finding that its derivation did not adequately account for redundancy in human-written text. We argue that better accounting for redundancy related to explanations might derive the observed flipped test and, more generally, improve computational models of speakers in linguistics. (A schematic version of such a co-occurrence test is sketched after this result list.)
- Wu, Zhaofeng; Peng, Hao; Pappas, Nikolaos; Smith, Noah A. (Findings of the Association for Computational Linguistics: EMNLP 2022)
- Wu, Zhaofeng; Peng, Hao; Pappas, Nikolaos; Smith, Noah A. (Findings of the Conference on Empirical Methods in Natural Language Processing)
- Peng, Hao; Kasai, Jungo; Pappas, Nikolaos; Yogatama, Dani; Wu, Zhaofeng; Kong, Lingpeng; Schwartz, Roy; Smith, Noah A. (Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics)
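To make the flavor of the co-occurrence test in the first result concrete, here is a minimal sketch that scores how much a premise raises a hypothesis's probability under a causal LM. The model choice (gpt2), the BOS-only baseline context, and the raw log-probability difference are illustrative assumptions, not the exact procedure from Merrill et al. (2022) or from this work; as the abstract notes, even the sign of the practically useful test is at issue.

```python
# Sketch only: scores sentence pairs with a HuggingFace causal LM.
# The gpt2 checkpoint, the BOS-only baseline context, and the raw
# difference score are illustrative assumptions, not the paper's test.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def log_prob_given(context: str, target: str) -> float:
    """Sum of log p(target token | preceding tokens) over the target span."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    # Leading space so the target tokenizes as a continuation under GPT-2 BPE.
    tgt_ids = tok(" " + target, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    logits = model(ids).logits
    # Logits at position i give the distribution over the token at position i+1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    positions = torch.arange(ctx_ids.shape[1] - 1, ids.shape[1] - 1)
    tokens = ids[0, 1:][positions]
    return log_probs[positions, tokens].sum().item()

def cooccurrence_score(premise: str, hypothesis: str) -> float:
    """How much the premise raises the hypothesis's log-probability,
    relative to a bare BOS context (an assumed stand-in for "no context")."""
    baseline = log_prob_given(tok.bos_token, hypothesis)
    return log_prob_given(premise, hypothesis) - baseline

print(cooccurrence_score("A dog is barking loudly.", "A dog is barking."))
print(cooccurrence_score("A dog is barking loudly.", "The ocean is calm."))
```

In practice a score like this would be thresholded, with the threshold and the sign calibrated on labeled entailment pairs, to check which direction actually separates entailing from non-entailing sentences.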